    The Time Course of Segmentation and Cue-Selectivity in the Human Visual Cortex

    Texture discontinuities are a fundamental cue by which the visual system segments objects from their background. The neural mechanisms supporting texture-based segmentation are therefore critical to visual perception and cognition. In the present experiment we employ an EEG source-imaging approach to study the time course of texture-based segmentation in the human brain. Visual Evoked Potentials were recorded to four types of stimuli in which periodic temporal modulation of a central 3° figure region could either support figure-ground segmentation, or have identical local texture modulations but not produce changes in global image segmentation. The image discontinuities were defined either by orientation or phase differences across image regions. Evoked responses to these four stimuli were analyzed both at the scalp and on the cortical surface in retinotopic and functional regions-of-interest (ROIs) defined separately using fMRI on a subject-by-subject basis. Texture segmentation (tsVEP: segmenting versus non-segmenting) and cue-specific (csVEP: orientation versus phase) responses exhibited distinctive patterns of activity. Alternations between uniform and segmented images produced highly asymmetric responses that were larger after transitions from the uniform to the segmented state. Texture modulations that signaled the appearance of a figure evoked a pattern of increased activity starting at ∼143 ms that was larger in V1 and LOC ROIs, relative to identical modulations that did not signal figure-ground segmentation. This segmentation-related activity occurred after an initial response phase that did not depend on the global segmentation structure of the image. The two cue types evoked similar tsVEPs up to ∼230 ms, after which they diverged in the V4 and LOC ROIs. The evolution of the response proceeded largely in the feed-forward direction, with only weak evidence for feedback-related activity.

    Spatial and Temporal Dynamics of Attentional Guidance during Inefficient Visual Search

    Spotting a prey or a predator is crucial in the natural environment and relies on the ability to quickly extract pertinent visual information. The experimental counterpart of this behavior is visual search (VS), where subjects have to identify a target amongst several distractors. In difficult VS tasks, it has been found that the reaction time (RT) is influenced by salience factors, such as target-distractor similarity, and this finding is usually regarded as evidence for guidance of attention by preattentive mechanisms. However, RT depends on multiple factors and therefore allows only very indirect inferences about the underlying attentional mechanisms. The purpose of the present study was to determine the influence of salience factors on attentional guidance during VS by directly measuring attentional allocation. We studied attention allocation using a dual covert VS task in which subjects had 1) to detect a target amongst different items and 2) to report letters briefly flashed inside those items at different delays. As predicted, we showed that parallel processes guide attention towards the most relevant item by virtue of both goal-directed and stimulus-driven factors, and we demonstrated that this attentional selection is a prerequisite for target detection. In addition, we show that when the target is characterized by two features (conjunction VS), the goal-directed effects of both features are initially combined into a unique salience value, but at a later stage, grouping phenomena interact with the salience computation and lead to the selection of a whole group of items. These results, in line with Guided Search Theory, show that efficient and rapid preattentive processes guide attention towards the most salient item, thereby reducing the number of attentional shifts needed to find the target.
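    The core idea of guidance here, following Guided Search, is that each item's attentional priority combines a stimulus-driven term with a goal-directed term, and attention visits items in order of decreasing priority. A minimal illustrative sketch, with hypothetical item values and weights not taken from the study:

    ```python
    import numpy as np

    # Toy Guided Search-style activation map: each item's priority is a
    # weighted sum of a bottom-up term (local feature contrast) and a
    # top-down term (similarity to the target template). Weights and
    # item values below are illustrative assumptions.
    def salience(bottom_up, top_down, w_bu=0.4, w_td=0.6):
        return w_bu * np.asarray(bottom_up) + w_td * np.asarray(top_down)

    bottom_up = [0.2, 0.9, 0.3, 0.1]   # stimulus-driven contrast per item
    top_down  = [0.1, 0.8, 0.2, 0.9]   # match to the target's features

    # Attention is deployed to items in decreasing order of activation,
    # so fewer shifts are needed before the true target is inspected.
    visit_order = np.argsort(-salience(bottom_up, top_down))
    print(visit_order.tolist())  # → [1, 3, 2, 0]
    ```

    Item 1 wins because it is strong on both terms; item 3 is visited next on top-down evidence alone, which is the sense in which goal-directed guidance reduces the number of shifts.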

    Predicted contextual modulation varies with distance from pinwheel centers in the orientation preference map

    In the primary visual cortex (V1) of some mammals, columns of neurons with the full range of orientation preferences converge at the center of a pinwheel-like arrangement, the ‘pinwheel center' (PWC). Because a neuron receives abundant inputs from nearby neurons, the neuron's position on the cortical map likely has a significant impact on its responses to the layout of orientations inside and outside its classical receptive field (CRF). To understand this positional specificity of responses, we constructed a computational model based on orientation preference maps in monkey V1 and hypothetical neuronal connections. The model simulations showed that neurons near PWCs displayed weaker but still detectable orientation selectivity within their CRFs, and strongly reduced contextual modulation from extra-CRF stimuli, compared with neurons distant from PWCs. We suggest that neurons near PWCs robustly extract local orientation within their CRF embedded in visual scenes, and that contextual information is processed in regions distant from PWCs.
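    The model's key structural premise is that a neuron near a PWC pools inputs spanning all orientations, whereas a neuron in an iso-orientation domain pools inputs with similar preferences. A minimal sketch of that premise, using an idealized pinwheel map and circular variance as a proxy for local input heterogeneity (all details here are illustrative assumptions, not the paper's model):

    ```python
    import numpy as np

    # Idealized pinwheel: preferred orientation rotates around the map
    # center (halving the position angle maps 360° of position onto the
    # 180° orientation cycle).
    def pinwheel_map(n=64):
        y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        return (np.arctan2(y, x) / 2.0) % np.pi  # preference in [0, pi)

    # Heterogeneity of preferences in a local neighborhood, measured as
    # circular variance (0 = all inputs share one orientation,
    # 1 = orientations maximally mixed).
    def local_heterogeneity(pref, i, j, r=3):
        patch = pref[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
        z = np.exp(2j * patch)       # double angles: period pi -> 2*pi
        return 1.0 - np.abs(z.mean())

    pref = pinwheel_map()
    near_pwc = local_heterogeneity(pref, 32, 32)  # at the pinwheel center
    iso_domain = local_heterogeneity(pref, 32, 60)  # far from the center
    print(near_pwc > iso_domain)  # → True
    ```

    A neuron pooling the highly mixed inputs near the PWC receives weaker orientation-specific drive from both its CRF surround and its extra-CRF context, which is the intuition behind the reduced contextual modulation reported in the abstract.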

    Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics

    Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal recognition in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches, and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays, and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lend statistical support to the theory that V1 computes visual salience.
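    The computation being generalized here, standard divisive normalization, divides a unit's driven response by the pooled activity of its surround plus a semi-saturation constant. A minimal sketch of that baseline form (not the paper's statistically gated extension; the parameter values are illustrative assumptions):

    ```python
    import numpy as np

    def divisive_normalization(center, surround, sigma=0.5, w=1.0):
        """Canonical divisive normalization.

        center:   driven (squared-filter) response of the target unit
        surround: responses of the normalization pool
        sigma:    semi-saturation constant; w: pool weight (assumed values)
        """
        pool = sigma ** 2 + w * np.sum(np.asarray(surround) ** 2)
        return center ** 2 / pool

    # Surround suppression falls out directly: the same center drive is
    # suppressed more when the surround pool is strongly active.
    weak_surround = divisive_normalization(1.0, [0.1, 0.1])
    strong_surround = divisive_normalization(1.0, [1.0, 1.0])
    print(weak_surround > strong_surround)  # → True
    ```

    In the paper's generalization, the weight the surround contributes to the pool is not fixed as here but depends on the estimated statistical dependence between center and surround, which is what permits facilitation rather than suppression when stimulation is weak or the context is judged independent.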